
Predicting Social Media Engagement from Emotional and Temporal Features

Kim, Yunwoo, Hwang, Junhyuk

arXiv.org Artificial Intelligence

We present a machine learning approach for predicting social media engagement (comments and likes) from emotional and temporal features. The dataset contains 600 songs annotated for valence, arousal, and related sentiment metrics. A multi-target regression model based on HistGradientBoostingRegressor is trained on log-transformed engagement ratios to address skewed targets. Performance is evaluated with both a custom order-of-magnitude accuracy and standard regression metrics, including the coefficient of determination (R^2). Results show that emotional and temporal metadata, together with existing view counts, predict future engagement effectively. The model attains R^2 = 0.98 for likes but only R^2 = 0.41 for comments. This gap indicates that likes are largely driven by readily captured affective and exposure signals, whereas comments depend on additional factors not represented in the current feature set.


Exploring Self-Identified Counseling Expertise in Online Support Forums

Lahnala, Allison, Zhao, Yuntian, Welch, Charles, Kummerfeld, Jonathan K., An, Lawrence, Resnicow, Kenneth, Mihalcea, Rada, Pérez-Rosas, Verónica

arXiv.org Artificial Intelligence

A growing number of people engage in online health forums, making it important to understand the quality of the advice they receive. In this paper, we explore the role of expertise in responses provided to help-seeking posts regarding mental health. We study the differences between (1) interactions with peers and (2) interactions with self-identified mental health professionals. First, we show that a classifier can distinguish between these two groups, indicating that their language use does in fact differ. To understand this difference, we perform several analyses addressing engagement aspects, including whether their comments engage the support-seeker further, as well as linguistic aspects, such as dominant language and linguistic style matching. Our work contributes to ongoing efforts to understand how health experts engage with health information- and support-seekers in social networks. More broadly, it is a step toward a deeper understanding of the styles of interaction that cultivate supportive engagement in online communities.


Machine Understanding of Scientific Language

Wright, Dustin

arXiv.org Artificial Intelligence

Scientific information expresses human understanding of nature. This knowledge is largely disseminated in different forms of text, including scientific papers, news articles, and discourse among people on social media. While important for accelerating our pursuit of knowledge, not all scientific text is faithful to the underlying science. As the volume of this text has burgeoned online in recent years, it has become a problem of societal importance to be able to identify the faithfulness of a given piece of scientific text automatically. This thesis is concerned with the cultivation of datasets, methods, and tools for machine understanding of scientific language, in order to analyze and understand science communication at scale. To arrive at this, I present several contributions in three areas of natural language processing and machine learning: automatic fact checking, learning with limited data, and scientific text processing. These contributions include new methods and resources for identifying check-worthy claims, adversarial claim generation, multi-source domain adaptation, learning from crowd-sourced labels, cite-worthiness detection, zero-shot scientific fact checking, detecting exaggerated scientific claims, and modeling degrees of information change in science communication. Critically, I demonstrate how the research outputs of this thesis are useful for effectively learning from limited amounts of scientific text in order to identify misinformative scientific statements and generate new insights into the science communication process.


Analyzing the Evolution of Graphs and Texts

Guo, Xingzhi

arXiv.org Artificial Intelligence

With recent advances in representation learning algorithms for graphs (e.g., DeepWalk/GraphSAGE) and natural language (e.g., Word2Vec/BERT), state-of-the-art models can achieve human-level performance on many downstream tasks, particularly node and sentence classification. However, most algorithms focus on large-scale models for static graphs and text corpora without considering the inherent dynamic characteristics or discovering the reasons behind the changes. This dissertation aims to efficiently model the dynamics in graphs (such as social networks and citation graphs) and understand the changes in texts (specifically news titles and personal biographies). To achieve this goal, we utilize the renowned Personalized PageRank algorithm to create effective dynamic network embeddings for evolving graphs. Our proposed approaches significantly improve running time and accuracy both for detecting abnormal network intruders and for discovering entity meaning shifts over large-scale dynamic graphs. For text changes, we analyze post-publication changes in news titles to understand the intents behind the edits and discuss the potential impact of title changes from an information integrity perspective. Moreover, we investigate self-presented occupational identities in Twitter users' biographies over five years, examining the effects of job prestige and demographics on how people disclose jobs, and quantifying over-represented jobs and their transitions over time.


A Survey on Automatic Credibility Assessment of Textual Credibility Signals in the Era of Large Language Models

Srba, Ivan, Razuvayevskaya, Olesya, Leite, João A., Moro, Robert, Schlicht, Ipek Baris, Tonelli, Sara, García, Francisco Moreno, Lottmann, Santiago Barrio, Teyssou, Denis, Porcellini, Valentin, Scarton, Carolina, Bontcheva, Kalina, Bielikova, Maria

arXiv.org Artificial Intelligence

In the current era of social media and generative AI, the ability to automatically assess the credibility of online social media content is of tremendous importance. Credibility assessment is fundamentally based on aggregating credibility signals, which refer to small units of information, such as content factuality, bias, or the presence of persuasion techniques, into an overall credibility score. Credibility signals provide more granular, more easily explainable, and more widely usable information than the currently predominant fake news detection approaches, which rely on various (mostly latent) features. The growing body of research on automatic credibility assessment and detection of credibility signals is highly fragmented and lacks mutual interconnections, an issue made even more prominent by the absence of an up-to-date overview of the field. In this survey, we provide such a systematic and comprehensive literature review of 175 research papers, focusing on textual credibility signals and Natural Language Processing (NLP), which is undergoing significant advancement due to Large Language Models (LLMs). While positioning the NLP research in the context of other multidisciplinary work, we cover approaches for credibility assessment as well as 9 categories of credibility signals (we provide a thorough analysis for 3 of them, namely: 1) factuality, subjectivity and bias, 2) persuasion techniques and logical fallacies, and 3) claims and veracity). Following the description of the existing methods, datasets, and tools, we identify future challenges and opportunities, paying specific attention to the recent rapid development of generative AI.


Personality Analysis for Social Media Users using Arabic language and its Effect on Sentiment Analysis

Dandash, Mokhaiber, Asadpour, Masoud

arXiv.org Artificial Intelligence

Social media is heading more and more toward personalization, where individuals reveal their beliefs, interests, habits, and activities, offering glimpses into their personality traits. This study explores the correlation between use of the Arabic language on Twitter and personality traits, and its impact on sentiment analysis. We inferred the personality traits of users from information extracted from their profile activities and the content of their tweets. Our analysis incorporated linguistic features, profile statistics (including gender, age, bio, etc.), and additional features such as emoticons. To obtain personality data, we crawled the timelines and profiles of users who took the 16personalities test in Arabic on 16personalities.com. Our dataset comprised 3,250 users who shared their personality results on Twitter. We implemented various machine learning techniques to reveal personality traits and developed a dedicated model for this purpose, achieving a 74.86% accuracy rate with BERT. Analysis of this dataset showed that linguistic features, profile features, and the derived model can be used to differentiate between personality traits. Furthermore, our findings demonstrated that personality affects sentiment in social media. This research contributes to ongoing efforts to develop a robust understanding of the relation between human behaviour on social media and personality features for real-world applications, such as political discourse analysis and public opinion tracking.


Community Needs and Assets: A Computational Analysis of Community Conversations

Chowdhury, Md Towhidul Absar, Sharma, Naveen, KhudaBukhsh, Ashiqur R.

arXiv.org Artificial Intelligence

A community needs assessment is a tool used by non-profits and government agencies to quantify the strengths and issues of a community, allowing them to allocate their resources better. Such approaches are transitioning towards leveraging social media conversations to analyze the needs of communities and the assets already present within them. However, manual analysis of exponentially increasing social media conversations is challenging, and the present literature has a gap in computationally analyzing how community members discuss the strengths and needs of their community. To address this gap, we introduce the task of identifying, extracting, and categorizing community needs and assets from conversational data using natural language processing methods. To facilitate this task, we introduce the first dataset about community needs and assets, consisting of 3,511 conversations from Reddit annotated by crowdsourced workers. Using this dataset, we evaluate an utterance-level classification model against sentiment classification and a popular large language model (in a zero-shot setting), finding that our model outperforms both baselines with an F1 score of 94%, compared to 49% and 61%, respectively. Furthermore, we observe through our study that conversations about needs have negative sentiments and emotions, while conversations about assets focus on locations and entities. The dataset is available at https://github.com/towhidabsar/CommunityNeeds.
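The utterance-level classification task described above can be illustrated with a minimal stand-in pipeline. This is not the paper's model: the TF-IDF + logistic regression classifier, the toy utterances, and the labels below are all hypothetical, chosen only to show the shape of the task (label each utterance as describing a need or an asset).

```python
# Hedged sketch: utterance-level need/asset classification with a simple
# TF-IDF + logistic regression stand-in on toy, hypothetical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

utterances = [
    "We desperately need a new shelter downtown",
    "The food bank on 5th street is a great resource",
    "There is no safe place for kids to play here",
    "Our community garden feeds dozens of families",
] * 10  # repeat the toy examples so the classifier has something to fit
labels = ["need", "asset", "need", "asset"] * 10

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, labels)
preds = clf.predict(utterances)

# F1 on the "need" class, the same metric family reported in the abstract.
f1 = f1_score(labels, preds, pos_label="need")
```

A real evaluation would of course use held-out conversations rather than scoring on the training data as this toy does.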


Reports of the Workshops Held at the 2023 International AAAI Conference on Web and Social Media

Interactive AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence's 17th International Conference on Web and Social Media (ICWSM-23) was held in Limassol, Cyprus, from June 5-8. There were six workshops in the program: Disrupt, Ally, Resist, Embrace (DARE): Action Items for Computational Social Scientists in a Changing World; Images in Online Political Communication (PhoMemes 2023); Data for the Wellbeing of Most Vulnerable; Novel Evaluation Approaches for Text Classification Systems (NEATCLasS); Mediate 2023: News Media and Computational Journalism; and TrueHealth 2023: Combating Health Misinformation for Social Well-being. In the past decade, many sophisticated AI-powered tools have been developed and released to the scientific community and the public at large. At the same time, the socio-technical platforms that are at the center of our observations have transformed in unanticipated ways. Many of these developments have occurred against a backdrop of political and social polarization and public health and macroeconomic crises, which offer multiple lenses to contextualize (or distort) scientific reflexivity.


How to Do Things without Words: Modeling Semantic Drift of Emoji

Arviv, Eyal, Tsur, Oren

arXiv.org Artificial Intelligence

Emoji have become a significant part of our informal textual communication. Previous work addressing the societal and linguistic functions of emoji overlooks the evolving meaning of these symbols. This evolution can be addressed through the framework of semantic drift. In this paper we model and analyze the semantic drift of emoji and discuss the features that may contribute to the drift, some unique to emoji and some more general.


Kwak

AAAI Conferences

In this work, we analyze more than two million news photos published in January 2016. We demonstrate i) which objects appear most often in news photos; ii) what the sentiments of news photos are; iii) whether the sentiment of news photos is aligned with the tone of the text; iv) how gender is treated; and v) how differently political candidates are portrayed. To the best of our knowledge, this is the first large-scale study of news photo contents using deep learning-based vision APIs.